    Spatial context-aware person-following for a domestic robot

    Domestic robots are a focus of research as service providers in households, and even as robotic companions that share the living space with humans. A major capability of mobile domestic robots is the joint exploration of space. One challenge in this task is how to let the robot move in reasonable, socially acceptable ways, so that its motion supports interaction and communication as part of the joint exploration. As a step towards this challenge, we have developed a context-aware following behavior that takes these social aspects into account and applied it, together with a multi-modal person-tracking method, to switch between three basic following approaches, namely direction-following, path-following, and parallel-following. These are derived from observations of human-human following schemes and are activated depending on the current spatial context (e.g. free space) and the relative position of the interacting human. The elementary behaviors are combined in real time on our mobile robot in different environments. First experimental results demonstrate the practicability of the proposed approach.
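
    The abstract does not spell out the switching logic; as a rough illustration only, a context-based switch between the three behaviors might look like the sketch below, where the thresholds and the SpatialContext fields are invented for the example:

```python
from dataclasses import dataclass
from enum import Enum, auto

class FollowMode(Enum):
    DIRECTION = auto()   # steer towards the person's current direction
    PATH = auto()        # retrace the person's trajectory
    PARALLEL = auto()    # move side by side with the person

@dataclass
class SpatialContext:
    free_width: float     # lateral free space in metres (hypothetical field)
    person_bearing: float # angle of the person relative to robot heading, radians

def select_follow_mode(ctx: SpatialContext) -> FollowMode:
    """Pick a following behavior from the spatial context and the
    person's relative position. Thresholds are illustrative only."""
    if ctx.free_width > 2.0 and abs(ctx.person_bearing) > 1.0:
        # enough lateral free space and the person walks beside the robot
        return FollowMode.PARALLEL
    if ctx.free_width < 1.2:
        # narrow passage: stay exactly on the person's path
        return FollowMode.PATH
    # default: simply head towards the person's direction
    return FollowMode.DIRECTION
```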

    A cognitive ego-vision system for interactive assistance

    With increasing computational power and decreasing size, computers nowadays are already wearable and mobile. They are becoming companions in people's everyday lives. Personal digital assistants and mobile phones equipped with adequate software attract a lot of public interest, although the assistance they provide amounts to little more than mobile databases for appointments, addresses, to-do lists, and photos. Compared to the assistance a human can provide, such systems can hardly be called real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capability leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle for their underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between the two. Ego-vision systems (EVS) take the user's (visual) perspective and integrate the human into the system's processing loop by means of shared perception and augmented reality. EVSs adopt techniques from cognitive vision to identify objects, interpret actions, and understand the user's visual perception, and they articulate their knowledge and interpretation by augmenting the user's own view. These two paradigms are studied as rather general concepts, but always with the goal of realizing more flexible assistance systems that closely collaborate with their users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given, and the benefits and challenges of this paradigm are discussed. Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented, in terms of object and action recognition, head gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for the information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of this VAM, and the functionality of the integrated system emerges from the coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined above is implemented on the basis of the visual active memory. The system architecture is discussed and some exemplary processing paths through the system are presented. The system assists users in object manipulation tasks and has reached a maturity level that allows user studies to be conducted. Quantitative results for the different integrated memory processes are presented, as well as an assessment of the interactive system by means of these user studies.
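
    The memory interface itself is not given in the abstract; the following is a minimal, hypothetical sketch of the visual active memory idea, with independent memory processes operating on a shared store (all names are illustrative, not the thesis API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MemoryElement:
    kind: str                 # e.g. "percept", "object-hypothesis", "episode"
    data: dict
    reliability: float = 0.5  # belief in this element, updated over time

class VisualActiveMemory:
    """Shared store that decoupled memory processes operate on."""
    def __init__(self) -> None:
        self.elements: list[MemoryElement] = []
        self.processes: list[Callable[["VisualActiveMemory"], None]] = []

    def insert(self, element: MemoryElement) -> None:
        self.elements.append(element)

    def query(self, kind: str) -> list[MemoryElement]:
        return [e for e in self.elements if e.kind == kind]

    def run_cycle(self) -> None:
        # system functionality emerges from the interplay of the processes
        for process in self.processes:
            process(self)

def forgetting_process(vam: VisualActiveMemory) -> None:
    """Example memory process: decay reliability, drop stale hypotheses."""
    for e in vam.elements:
        e.reliability *= 0.99
    vam.elements = [e for e in vam.elements if e.reliability > 0.05]
```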

    Long-Term Simulation of Dynamic, Interactive Worlds with MORSE

    In this talk I present recent work in the context of the European Integrated Project STRANDS (Spatio-temporal representations for Cognitive Control in Long-term Activities, http://strands-project.eu) that employs the MORSE simulator. In particular, I present three application domains within the project that facilitate the long-term simulation of dynamic worlds. First, MORSE is employed in a continuous integration and testing framework that helps to ensure high code quality, not only at the level of compilation, but also at the level of system integration and deployment. MORSE is used to run system-level tests defined for the Jenkins continuous integration platform, enabling STRANDS to maintain the high level of code consistency required to successfully participate in events such as the Robot Marathon. Secondly, I outline the use of MORSE, and particularly its flexible builder scripts, to automatically generate worlds from Qualitative Spatial Relations (QSR). These allow a world to be defined qualitatively at the level of builder scripts, with defined probability distributions over metric object positions and orientations. In STRANDS, this ability is exploited to generate randomised worlds in a controlled way, both to study the formation of QSRs and to generate randomised worlds for the aforementioned testing framework. Finally, I present our work on human-robot spatial interaction and our preliminary efforts to simulate crowds in a robotic simulator. This final contribution is mostly work in progress, with the implementation in MORSE still pending, but initial results have been obtained simulating thousands of agents in a simulated airport environment, using a hierarchical representation of vector maps to generate individual trajectories for the agents. This work will lead to a more realistic simulation of human-inhabited environments and is used in STRANDS' research on human-robot spatial interaction.
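
    The abstract leaves the QSR-to-metric mapping open; a minimal sketch of the idea, with invented relations and Gaussian placement noise standing in for the defined probability distributions, could look like this:

```python
import random

def sample_position(anchor, relation, spread=0.15):
    """Turn a qualitative spatial relation into a metric (x, y) sample.
    The relations and nominal offsets here are illustrative, not the
    STRANDS relation set."""
    ax, ay = anchor
    offsets = {
        "left_of": (-0.5, 0.0),
        "right_of": (0.5, 0.0),
        "in_front_of": (0.0, 0.5),
        "behind": (0.0, -0.5),
    }
    dx, dy = offsets[relation]
    # Gaussian noise yields a controlled, randomised world on each run
    return (ax + dx + random.gauss(0.0, spread),
            ay + dy + random.gauss(0.0, spread))

# e.g. place a cup somewhere left of a plate located at (1.0, 2.0)
cup_xy = sample_position((1.0, 2.0), "left_of")
```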

    The use of synchrony in parent-child interaction can be measured on a signal-level

    In our approach, we aim at an objective measurement of synchrony in multimodal tutoring behavior. The use of signal correlation provides a well-formalized method that yields gradual information about the degree of synchrony. For our analysis, we used and extended an algorithm proposed by Hershey & Movellan (2000) that correlates single-pixel values of a video signal with the loudness of the corresponding audio track over time. The results of all pixels are integrated over the video to achieve a scalar estimate of synchrony.
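
    As a rough sketch of such a measure (reduced here to a plain per-pixel Pearson correlation rather than the exact mutual-information formulation of Hershey & Movellan), one could compute:

```python
import numpy as np

def synchrony_score(video: np.ndarray, loudness: np.ndarray) -> float:
    """video: (T, H, W) grayscale frames; loudness: (T,) audio energy.
    Correlates each pixel's time series with the loudness track, then
    averages the correlation magnitudes into a scalar synchrony estimate."""
    T, H, W = video.shape
    pixels = video.reshape(T, H * W).astype(float)
    pixels -= pixels.mean(axis=0)
    audio = loudness - loudness.mean()
    denom = pixels.std(axis=0) * audio.std() + 1e-9  # avoid divide-by-zero
    corr = (pixels * audio[:, None]).mean(axis=0) / denom
    return float(np.abs(corr).mean())
```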

    From images via symbols to contexts: using augmented reality for interactive model acquisition

    Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented reality provides an interesting perspective on this problem because a human user can directly relate displayed system results to the real environment. In the following we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, enabling the system to react to specific scene contexts.
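
    Purely as an illustration of the image-to-symbol-to-context progression (the actual episode representation is not given in the abstract), extracting relational context from stored episodes might look like:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Episode:
    objects: list   # symbolic labels learned from user interaction
    action: str     # e.g. "pick-up", "point-at"

def extract_contexts(episodes, min_support=2):
    """Mine object pairs that repeatedly co-occur across stored episodes;
    such relations let the system react to specific scene contexts."""
    pair_counts = Counter()
    for ep in episodes:
        objs = sorted(set(ep.objects))
        for i, a in enumerate(objs):
            for b in objs[i + 1:]:
                pair_counts[(a, b)] += 1
    return [pair for pair, n in pair_counts.items() if n >= min_support]

episodes = [
    Episode(["cup", "saucer"], "pick-up"),
    Episode(["cup", "saucer", "spoon"], "point-at"),
]
print(extract_contexts(episodes))  # [('cup', 'saucer')]
```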

    Feature and viewpoint selection for industrial car assembly

    Quality assurance programs of today's car manufacturers show an increasing demand for automated visual inspection tasks. A typical example is the just-in-time checking of assemblies along production lines. Since high throughput must be achieved, object recognition and pose estimation rely heavily on offline preprocessing stages of the available CAD data. In this paper, we propose a complete, universal framework for CAD model feature extraction and entropy-index-based viewpoint selection that was developed in cooperation with a major German car manufacturer.
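
    As an illustration of entropy-based viewpoint scoring (the paper's actual entropy index is not specified in the abstract), a viewpoint can be preferred when the features visible from it form a more informative distribution:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def best_viewpoint(viewpoints):
    """viewpoints: mapping name -> histogram of visible feature types.
    Prefer the view whose feature distribution is most informative."""
    def score(hist):
        total = sum(hist.values())
        return entropy([n / total for n in hist.values()])
    return max(viewpoints, key=lambda v: score(viewpoints[v]))

views = {
    "front": {"edge": 12, "hole": 1},             # dominated by edges
    "oblique": {"edge": 6, "hole": 4, "rib": 5},  # balanced feature mix
}
print(best_viewpoint(views))  # 'oblique'
```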

    Who am I talking with? A face memory for social robots

    In order to provide personalized services and to develop human-like interaction capabilities, robots need to recognize their human partners. Face recognition has been studied exhaustively in the past decade in the context of security systems, with significant progress on huge datasets. However, these capabilities are not in focus when it comes to social interaction situations. Humans are able to remember people seen for only a short moment and to apply this knowledge directly when engaging in conversation. In order to equip a robot with capabilities to recall human interlocutors and to provide user-aware services, we adopt human-human interaction schemes and propose a face memory on the basis of active appearance models, integrated with the active memory architecture. This paper presents the concept of the interactive face memory, the applied recognition algorithms, and their embedding into the robot's system architecture. Performance measures are discussed for general face databases as well as for scenario-specific datasets.
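
    A face memory of this kind can be pictured, in much simplified form, as an enroll-and-recall store over face descriptors; the sketch below assumes the descriptor (in the paper, an active appearance model fit) is computed upstream, and the threshold is arbitrary:

```python
import numpy as np

class FaceMemory:
    """Toy interactive face memory: enroll a face descriptor once, then
    recall the identity of later sightings by nearest-neighbor matching."""
    def __init__(self, threshold: float = 0.6):
        self.names = []
        self.embeddings = []
        self.threshold = threshold

    def enroll(self, name: str, descriptor: np.ndarray) -> None:
        self.names.append(name)
        self.embeddings.append(descriptor / np.linalg.norm(descriptor))

    def recall(self, descriptor: np.ndarray):
        """Return the best-matching name, or None if nobody is similar
        enough (an unknown interlocutor the robot has not met yet)."""
        if not self.embeddings:
            return None
        q = descriptor / np.linalg.norm(descriptor)
        sims = [float(q @ e) for e in self.embeddings]
        best = int(np.argmax(sims))
        return self.names[best] if sims[best] >= self.threshold else None
```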

    Active vision-based localization for robots in a home-tour scenario

    Self-localization is a crucial task for mobile robots. It is not only a requirement for autonomous navigation but also provides contextual information to support human-robot interaction (HRI). In this paper we present an active vision-based localization method for integration into a complex robot system operating in human interaction scenarios (e.g. a home tour) in a real-world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera, shared between different vision applications running in parallel, to reduce the number of sensors. Additional information from other modalities (such as laser scanners) can be used, profiting from the integration into an existing system. The camera view can be actively adapted, and the evaluation showed that different rooms can be discerned.
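
    As a simplified illustration of holistic-feature room discrimination (the paper's concrete features and matching scheme are not reproduced here), one could match a whole-image feature vector against per-room references and use the confidence to decide whether to pan the camera for another view:

```python
import numpy as np

def classify_room(observation, room_models):
    """observation: holistic feature vector of the current view.
    room_models: mapping room name -> list of reference feature vectors.
    Returns the best room and a cosine-similarity confidence."""
    obs = observation / np.linalg.norm(observation)
    scores = {}
    for room, refs in room_models.items():
        sims = [float(obs @ (r / np.linalg.norm(r))) for r in refs]
        scores[room] = max(sims)
    best = max(scores, key=scores.get)
    return best, scores[best]

# Active component (hypothetical): if the confidence is low, pan the
# shared pan-tilt camera to a new view and classify again.
```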

    The when, where, and how: an adaptive robotic info-terminal for care home residents – a long-term study

    Adapting to users' intentions is a key requirement for autonomous robots in general, and in care settings in particular. In this paper, a comprehensive long-term study of a mobile robot providing information services to residents, visitors, and staff of a care home is presented, with a focus on adapting when and where the robot should offer its services to best accommodate the users' needs. Rather than following a fixed schedule, the presented system takes the opportunity of long-term deployment to explore the space of possible interactions while concurrently exploiting the model learned so far to provide better services. But in order to provide effective services to users in a care home, not only the when and where are relevant, but also how the information is provided and accessed. Hence, the usability of the deployed system is also studied specifically, in order to provide the most comprehensive overall assessment of a robotic info-terminal implementation in a care setting. Our results support our hypotheses, (i) that learning a spatiotemporal model of users' intentions improves the efficiency and usefulness of the system, and (ii) that the specific information sought is indeed dependent on the location at which the info-terminal is offered.
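
    The abstract does not describe the learning mechanism in detail; as a toy stand-in, an epsilon-greedy scheduler over (location, hour) slots captures the explore-while-exploit idea, with interaction counts as a hypothetical reward signal:

```python
import random
from collections import defaultdict

class InfoTerminalScheduler:
    """Toy explore/exploit scheduler over (location, hour) slots. The
    paper's actual spatiotemporal model is more sophisticated than this."""
    def __init__(self, slots, epsilon=0.2):
        self.slots = slots            # list of (location, hour) pairs
        self.epsilon = epsilon
        self.totals = defaultdict(float)
        self.visits = defaultdict(int)

    def choose_slot(self):
        unexplored = [s for s in self.slots if self.visits[s] == 0]
        if unexplored or random.random() < self.epsilon:
            # explore: try slots we still know little about
            return random.choice(unexplored or self.slots)
        # exploit: pick the slot with the best mean reward so far
        return max(self.slots, key=lambda s: self.totals[s] / self.visits[s])

    def record(self, slot, interactions):
        """Feed back the observed number of interactions in that slot."""
        self.totals[slot] += interactions
        self.visits[slot] += 1
```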